METHOD FOR EXTRACTING MORPHOLOGICAL CHARACTERISTICS FROM A SAMPLE OF BIOLOGICAL MATERIAL
Patent abstract:
The present invention relates to a method for extracting morphological characteristics from a sample of biological material, in particular fingerprints, notably internal or external, using a coherent optical tomography acquisition system delivering a signal representative of the sample, a method in which an image containing intensity information and an image containing phase information are formed from at least the signal supplied by the acquisition system and representative of the sample, in order to extract morphological characteristics from the sample.
Publication number: FR3041423A1
Application number: FR1558929
Filing date: 2015-09-22
Publication date: 2017-03-24
Inventors: Francois Lamare; Yaneck Gottesman; Bernadette Dorizzi
Applicant: Institut Mines Telecom IMT
IPC main class:
Patent description:
with, for example, α_I(x, y) = C_I(x, y)/Norm and α_P(x, y) = C_P(x, y)/Norm, where (x, y) are the coordinates of a pixel, C_I(x, y) is the quality value of the pixel (x, y) of the image I, with 0 ≤ C_I(x, y) ≤ 1, C_P(x, y) is the quality value of the pixel (x, y) of the image P, with 0 ≤ C_P(x, y) ≤ 1, and Norm = C_I(x, y) + C_P(x, y). If Norm = 0, α_I = α_P = 0.5 is preferably set. Depending on the fusion formula used, the values α_I and α_P may be expressed differently; the invention is not limited to a particular calculation for the values α_I and α_P. In a variant, the fusion image between the image containing intensity information and the image containing phase information is advantageously formed by retaining, for each pixel, the pixel of the image having the highest quality value: F(x, y) = I(x, y) if C_I(x, y) > C_P(x, y), and F(x, y) = P(x, y) if C_I(x, y) ≤ C_P(x, y). The fusion image between the image containing intensity information and the image containing phase information is thus advantageously formed pixel by pixel, as a function of the neighborhood of each pixel, thanks to the confidence maps. In the case where the image considered is a fingerprint, the quality value of a pixel, C_P(x, y) or C_I(x, y), can be obtained from reliability maps of the ridge orientation fields of the fingerprint ("orientation field reliability maps"), as described in the articles by J. Zhou and J. Gu, "A model-based method for the computation of fingerprints' orientation field", IEEE Transactions on Image Processing, vol. 13, no. 6, 2004, and by M. S. Khalil, "Deducting fingerprint singular points using orientation field reliability", First Conference on Robot, Vision and Signal Processing, pp. 234-286, 2011. The ridge orientation fields represent the direction of the ridges at each position in the fingerprint. They are computed for each pixel of the fingerprint image, as a function of its neighborhood. The use of such orientation fields in fingerprint biometrics is known, for example in fingerprint image enhancement methods such as the one described in the article by L. Hong et al., "Fingerprint image enhancement: algorithm and performance evaluation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, 1998. The reliability maps of the orientation fields make it possible to evaluate the validity and reliability of the estimate of the ridge orientation. A low-quality fingerprint image region can be characterized by the fact that the ridge texture is not apparent, the periodic structure characteristic of the ridges not being found there. In such regions, the orientation estimate is poor because there is no predominant orientation. Consequently, the reliability value is low. Conversely, in highly structured fingerprint regions, the presence of a particular direction can be estimated reliably. The reliability value for these regions is high. As explained in the articles by C. Sousedik et al., "Volumetric Fingerprint Data Analysis using Optical Coherence Tomography", BIOSIG Conference, 2013, pp. 1-6, and by C. Sousedik and C. Busch, "Quality of fingerprint scans captured using Optical Coherence Tomography", IJCB Conference, 2014, pp. 1-8, the structure of the internal fingerprint can be quite inhomogeneous, unlike that of the external fingerprint, which is quite continuous, leading to an ambiguity on the position of the second peak of maximum reflectivity.
The structure of the internal fingerprint can also vary greatly from one individual to another. Detecting the position of the internal fingerprint by time-of-flight measurements can therefore be tricky, insofar as the interface is not necessarily well defined. In addition, the backscattering of light in the skin involves complex physical phenomena that are difficult to predict, related to the interference between the multiple waves backscattered by the biological structures. It is not obvious that, in a fingerprint, the ridge tops correspond to reflectivity maxima and the valleys to minima, or vice versa. The fusion of the phase and intensity images makes it possible to take the best advantage of the information available across the two images, and thus to significantly improve the quality of the final image obtained on the desired surface. In the biometrics sector, for example, this results in a significant improvement of the identification performance based on the subcutaneous fingerprint, using known biometric identification algorithms.
Location accuracy
The location accuracy of the peaks of maximum reflectivity partly determines the quality of the 3D and 2D phase images. The location accuracy, which is different from the axial resolution, is a notion little known in the prior art and little exploited in biomedical applications. The axial resolution corresponds to the minimum distance required between two scattering centers in order to be able to distinguish them correctly, and depends only on the spectral width of the light source. It can be measured from the width at half maximum of a peak associated with a single scattering center, for example the first peak, numbered 1. The location accuracy is advantageously related to the error in locating the maximum of the envelope of the various "A-scan" profiles. In order to evaluate the location accuracy, a statistical study is carried out, consisting in simulating the peak associated with a single scattering center, whose position is fixed during the simulation, while taking into account the various noise contributions of the photodetector of the acquisition system, mainly thermal noise and shot noise, whose distributions can be assimilated to white noise. Depending on its power, this noise can have a greater or lesser impact on the measured position of the maximum of the peak. The error made on the position can be evaluated by the difference between the position of the maximum of the simulated noisy "A-scan" profile and that of the reference "A-scan" profile used, which is known beforehand. The location accuracy of the acquisition system is then defined as the standard deviation of this location error. The standard deviation is advantageously obtained from a large number of random draws of noisy "A-scan" profiles.
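As an illustration of this statistical study, the following sketch is a minimal Monte Carlo simulation under stated assumptions: the single scattering centre is modelled as a Gaussian envelope whose full width at half maximum equals the axial resolution, additive white noise stands in for the photodetector's thermal and shot noise, the SNR is interpreted as a peak-amplitude to noise-rms ratio in dB, and a three-point parabolic fit is used for sub-sample peak localization. Parameter values are hypothetical.

```python
import numpy as np

def localization_accuracy(axial_res_um=10.0, snr_db=50.0, n_draws=5000,
                          dz_um=0.1, window_um=200.0):
    """Monte Carlo estimate of the peak-localization accuracy of an A-scan.

    A single scattering centre is simulated as a Gaussian envelope of known
    position; white noise at the requested SNR is added, and the standard
    deviation of the peak-position error over many draws is returned (in µm).
    """
    z = np.arange(0.0, window_um, dz_um)                      # depth axis (µm)
    true_pos = window_um / 2.0
    sigma = axial_res_um / (2.0 * np.sqrt(2.0 * np.log(2.0))) # FWHM -> sigma
    envelope = np.exp(-0.5 * ((z - true_pos) / sigma) ** 2)   # reference peak
    noise_rms = 10.0 ** (-snr_db / 20.0)                      # peak amplitude = 1
    rng = np.random.default_rng(0)
    errors = np.empty(n_draws)
    for k in range(n_draws):
        y = envelope + rng.normal(0.0, noise_rms, z.size)
        i = int(np.clip(np.argmax(y), 1, z.size - 2))
        # Three-point parabolic interpolation around the discrete maximum.
        d = 0.5 * (y[i - 1] - y[i + 1]) / (y[i - 1] - 2.0 * y[i] + y[i + 1])
        errors[k] = (i + d) * dz_um - true_pos
    return errors.std()

if __name__ == "__main__":
    for snr in (30.0, 40.0, 50.0):
        acc_nm = localization_accuracy(snr_db=snr) * 1000.0
        print(f"SNR = {snr:.0f} dB -> location accuracy ~ {acc_nm:.0f} nm")
```

This is only a sketch of the kind of study described; it is not claimed to reproduce the exact noise model or the numerical values reported further on.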
Device
According to another of its aspects, the invention relates to a device for extracting morphological characteristics from a sample of biological material, in particular fingerprints, notably internal or external, comprising a coherent optical tomography acquisition system delivering a signal representative of the sample, the device being configured to form, from at least the signal delivered by the acquisition system and representative of the sample, an image containing intensity information and an image containing phase information, in order to extract morphological characteristics from the sample. In a preferred embodiment of the invention, the device is further configured to merge the image containing intensity information and the image containing phase information so as to form a single image. The characteristics described above for the method according to the invention apply to the device. The field of view of the device, corresponding to the maximum spatial extent in the X-Y plane that can be recorded, can be extended, reaching for example 2 mm by 2 mm, i.e. 4 mm². This makes it possible to obtain a large number of minutiae in the case of the extraction of a fingerprint. The invention will be better understood on reading the following detailed description of non-limiting examples of implementation thereof, and on examining the appended drawing, in which:
- Figure 1, previously described, represents a volume image obtained by a coherent optical tomography acquisition system on a finger,
- Figures 2(a) and 2(b), previously described, represent, respectively, the intensity image and the processed image obtained, according to the prior art, from the volume of Figure 1,
- Figures 3(a) and 3(b), previously described, represent, respectively, the acquisition of a fingerprint by tomography and the "A-scan" profile obtained as a function of the time of flight of the light,
- Figure 4, previously described, represents the intensity of an "A-scan" profile as a function of depth,
- Figure 5, previously described, illustrates the presence of water drops on the surface of a finger,
- Figure 6, previously described, represents the intensity image of the wet finger of Figure 5, obtained by OCT according to the prior art,
- Figure 7, previously described, represents the cross-section of a tomographic volume obtained according to the prior art,
- Figure 8, previously described, represents the internal 3D fingerprint obtained from the volume of Figure 7 according to a method of the prior art,
- Figure 9, previously described, illustrates the average envelope of the surface of a finger,
- Figure 10 represents an OCT device according to the invention,
- Figure 11(a) represents the phase image and Figure 11(b) the intensity image of the internal fingerprint, obtained by implementing the method according to the invention from the tomographic volume of Figure 1,
- Figures 12(a) and 12(b) represent, respectively, the phase image and the processed image of the internal fingerprint obtained, according to the invention, from the tomographic volume of Figure 1,
- Figure 13 illustrates a comparison between two phase images obtained according to the invention,
- Figures 14(a) and 14(b) represent, respectively, the image merging the phase and intensity information, and the processed image, obtained according to the invention,
- Figure 15 represents internal fingerprints, the associated minutiae, and the ridge-orientation reliability map for the phase image, the intensity image and the fusion of the two, obtained according to the invention,
- Figure 16 is a graph presenting performance curves obtained by implementing the method according to the invention,
- Figure 17 is a graph representing the probability densities of genuine and impostor scores, using a database of internal fingerprint images extracted according to the invention,
- Figure 18 represents an intensity image of a fingerprint in the case of a moistened finger,
- Figures 19(a) and 19(b) represent, respectively, an image after fusion of a fingerprint in the case of the moistened finger of Figure 18, obtained according to the invention, and the corresponding phase image,
- Figure 20 is a graph representing the location error according to the invention as a function of the signal-to-noise ratio and the axial resolution,
- Figure 21 is a graph presenting comparative performance curves, and
- Figure 22 illustrates a comparison of images obtained from a wet finger, with sensors according to the prior art and according to the invention.
An OCT device 10 making it possible to implement the invention is shown in Figure 10. This device 10 comprises a swept source 11 configured to scan the sample at different depths, a mirror 12, a partial mirror 13, and a Michelson interferometer 14. Each wavelength scan, or "A-scan", produces interference fringes from the reflections of the sample at different depths. As described above, a 3D phase image of the internal fingerprint is obtained, according to the invention, from the tomographic volume of Figure 1, and is shown in Figure 11(a). The intensity image of the same internal fingerprint, shown in Figure 11(b), exhibits unusable areas with very low contrast. These areas are random because they depend, among other things, on the local scattering properties of the biological tissue but also on the angle of incidence of the probe of the coherent optical tomography acquisition system, in particular in the case of a contactless measurement where the measurement is not reproducible. Figure 12(a) represents a raw phase image, Figure 12(b) representing the corresponding image delivered at the output of the "matcher". These images are to be compared with the intensity images presented in Figure 2, previously described. The positions of the unusable areas of the image of Figure 12(b) are different from those of Figure 2(b). Thus, using both the characteristics extracted from the intensity image and those extracted from the phase image makes it possible to improve the identification of the person corresponding to this fingerprint. Figure 13(a) represents a 3D image of the external fingerprint onto which the phase information has been projected, obtained from the tomographic volume of Figure 1, according to the invention. The high values, shown in white, correspond to a short time of flight between the probe of the OCT acquisition system and the surface of the fingerprint, and the low values, shown in black, correspond to a longer time of flight. This example does not make it possible to obtain good fingerprint images directly, insofar as the ridges cannot be properly discerned.
This is because the reference used to measure the time of flight, that is to say the probe of the OCT acquisition system, is not located at an equal distance from all the points of the surface of the finger. In order to obtain a better contrast, as described above, the average envelope of the surface of the finger is taken as the reference for the time of flight. As can be seen in Figure 13(b), which represents the 3D fingerprint onto which the delta-phase information, that is to say the phase variations, corresponding to the relevant information for obtaining well-contrasted fingerprint images, has been projected, the ridges are in this case clearly visible. As described above, the image containing intensity information and the image containing phase information are merged to form a single image, using the confidence maps of each image, which deliver pixel-by-pixel quality values. An image formed by the fusion of the intensity image of Figure 2(a) and the phase image of Figure 12(a) is visible in Figure 14(a), the image corresponding to the output of the "matcher" being shown in Figure 14(b). Thanks to the fusion, the resulting image is of much better quality, the unusable areas having almost disappeared. Figure 15 represents internal fingerprints, the associated minutiae, and the ridge-orientation reliability map for the phase image, the intensity image and the fusion of the two. The complementary images were considered in order to use the usual representation of fingerprints. The images on the first row correspond to the flattened internal fingerprint images, in the three representations. The images on the second row represent the same images after preprocessing and binarization steps, the Verifinger software, developed by Neurotechnology, having been used in the example described. In these images, the minutiae extracted from the binarized image, represented by the black dots and exploited by the "matchers", are used for the identification step by matching the minutiae of two fingerprint images. For both the phase and intensity representations, the image quality is poor in certain regions, as indicated by the black circles. In such regions, the fingerprint ridges are not visible. Consequently, the quality of these regions is not sufficient to ensure correct minutia detection, as illustrated by the white holes in the binarized images. In the representations of the reliability maps of the ridge orientation fields, the dark pixels correspond to low reliability values while the bright pixels correspond to high values. In the intensity and phase representations, low reliability values are associated with the poor-quality areas. It should be noted that, preferably and in the example described, the problematic regions are not located at the same place in the two representations. As can be seen in the last column of Figure 15, the internal fingerprint image after the fusion of the intensity and phase images is of much better quality, this image having been reconstructed by choosing the best regions of the two representations. The ridge structure is better preserved over the whole image.
The regions with holes in the binarized image have disappeared, which leads to more robust minutia detection. The reliability map for the image after fusion clearly illustrates the improvement of the overall image quality, the bright areas being more numerous and more extensive. Figure 16 represents a comparison of the performance obtained for the different representations, on a database comprising about one hundred fingers, in terms of false acceptance rate FAR as a function of the false rejection rate FRR. These detection error tradeoff (DET) curves, giving the false acceptance rate as a function of the false rejection rate, are well known for evaluating the performance of biometric systems. The lower these curves, the better the performance, a minimum false rejection rate being sought for a given false acceptance rate. The curve in small dots corresponds to the reference curve, obtained with a phase image of the external fingerprint, this fingerprint being by nature easily accessible to the various sensors. The curve in wide dots and the curve in broken dashes correspond respectively to the curves of the internal fingerprints extracted from the intensity and phase images, and are approximately at the same level. For a false acceptance rate of 10⁻³, for example, the false rejection rate is degraded by a factor of 2-3 compared with the false rejection rate associated with the reference curve. This result reflects the difficulty of accessing the internal fingerprint. The curve in solid lines is computed from the images after fusion. For this same false acceptance rate, the false rejection rate is reduced by a factor of about 3-4 compared with that associated with the curves corresponding to the phase and intensity internal fingerprint images. In another example, for a false acceptance rate of 0.01%, the false rejection rate is about 7% for the images after fusion, against 26% for the phase images and 20% for the intensity images. For a false acceptance rate of 0.1%, the false rejection rate is about 4% for the images after fusion, against 20% for the phase images and 14% for the intensity images. It should also be noted that the performance is better with the internal fingerprint images after fusion than with the external fingerprint phase images, the internal fingerprints being better preserved than the external ones. Probability densities of genuine and impostor scores are shown in Figure 17, obtained on a database of internal fingerprint images extracted according to the invention, comprising 102 different fingers from 15 individuals, each finger having been acquired 4 times. The internal fingerprint images in the three representations, intensity, phase and after fusion, were extracted from the tomographic volumes. For the verification tests, each internal fingerprint image was compared with all the other images of the database, leading to a total of 166056 fingerprint comparisons. The comparison of two images from the same finger is called genuine matching, and the comparison of two images from different fingers is called impostor matching. The similarity scores are computed with the NBIS software (NIST Biometric Image Software).
In this example, the MINDTCT algorithm extracts the minutiae of a fingerprint image and the BOZORTH3 "matcher" returns the similarity score between two images. Two probability densities of scores, the genuine density and the impostor density, are obtained, the ability to discern these densities making it possible to quantify the performance of a biometric system. The final decision is made by comparing the similarity score obtained with a threshold, chosen as a function of the score densities and of the desired performance. As the genuine and impostor densities overlap, false rejection or false acceptance errors are made when the decision is taken. The verification performance is finally evaluated using the performance curves obtained by varying the matching threshold. The performance obtained demonstrates that the internal fingerprint allows the identification of persons with performance comparable to that obtained by known biometric readers on the external fingerprint of a dry finger. The identification of persons with dirty or wet fingers is also more effective than what is possible using known biometric systems. Figure 21 presents a comparison of the performance obtained by using the internal fingerprint extracted by fusion according to the invention with that obtained by using the external fingerprint extracted by sensors according to the prior art, for example a capacitive 2D sensor. Similar FRRs are obtained for a fixed FAR. By extension, in the case of wet fingers, the performance obtained by using the internal fingerprint extracted by fusion according to the invention is better than that obtained by the sensors according to the prior art, for example a capacitive 2D sensor. Indeed, the performance of the capacitive 2D sensor in the wet case is necessarily worse than that presented in the normal case, illustrated by the dotted curve of Figure 21. Figures 18 and 19 show fingerprints obtained in the case of wet fingers. As can be seen in Figure 18, the intensity image has very poorly contrasted areas at the level of the wet zones. The corresponding phase and post-fusion images, obtained according to the invention, are shown in Figures 19(b) and 19(a) respectively. The phase image is of better quality than the intensity image and directly usable, and the image after fusion is of very good quality, exhibiting almost no defects liable to impair the identification of the fingerprint. Figures 22(a) and 22(b) represent the fingerprint images of a wet finger for two known 2D sensors, respectively an optical sensor and a capacitive sensor. Black spots due to the excessive moisture of the finger are visible in the images. These spots considerably degrade the quality of the images, and consequently reduce the authentication performance. The corresponding binarized images show that the spotted areas were not recognized in the fingerprint. By comparison, the phase image obtained according to the invention, shown in Figure 22(c), is of much better quality. Figure 20 represents the standard deviation of the location error as a function of the signal-to-noise ratio SNR, defined as the ratio between the intensity level of the peak and that of the background noise, as described above with reference to Figure 4, for different axial resolutions, from 5 µm to 25 µm.
For a signal-to-noise ratio of 50 dB, corresponding to a typical value for backscattering at the air/skin interface, the location error is estimated between 60 nm and 350 nm. The location error is well below the axial resolution of the acquisition system, evaluated at about 10 µm in the example considered. The location accuracy is also well below the order of magnitude of the wavelength of the light source used, approximately equal to 1300 nm. Considering, according to the ergodicity principle, that the ensemble statistics of the simulated "A-scan" profiles are equivalent to the spatial statistics, it appears that the contribution of the noise during the extraction of the 3D surface of the fingerprints is negligible compared with the average depth of a fingerprint valley, approximately equal to 50 µm. The invention thus makes it possible to correctly distinguish, by phase measurements, the valleys and the tops of the ridges of fingerprints. Moreover, even in the case of lower instrumental performance, that is to say for a low axial resolution, it is still possible to extract the ridges of the fingerprint with great accuracy. The invention can thus make it possible to propose an OCT biometric sensor that performs well in imaging, at a lower cost than known sensors. The invention is not limited to the examples that have just been described. The identification of 3D fingerprints requires tools that are more complex to implement than the traditional 2D image matching tools, as described in the article by A. Kumar and C. Kwong, "Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification", CVPR IEEE Conference, 2013, pp. 3438-3443. With the aim of being able to reuse already existing tools, the 3D fingerprints obtained according to the invention are advantageously transformed into 2D images by means of a texture-mapping method on 3D surfaces, close to the one described in the article by G. Zigelman et al., "Texture mapping using surface flattening via multidimensional scaling", IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 2, 2002. This method relies on the use of the "Fast Marching" algorithm, described in the article by R. Kimmel and J. A. Sethian, "Computing geodesic paths on manifolds", Proceedings of the National Academy of Sciences, vol. 95, pp. 8431-8435, 1998, and of the MDS ("Multidimensional Scaling") algorithm. In particular, for the flattening of a 3D fingerprint, the "Fast Marching" algorithm is used to compute the geodesic distances from a triangular mesh of its average envelope, that is to say the 3D surface of the fingerprint without the ridges. The "Multidimensional Scaling" algorithm is applied to transform the meshed 3D surface into a 2D image, under the constraint of minimizing the distortion of the geodesic distances. This makes it possible to best preserve the distances between the minutiae, which is particularly interesting in the context of biometrics. Different texture images can be projected onto this flattened 2D surface, for example the intensity texture image I(x, y), the phase texture image P(x, y), or the fusion texture image F(x, y). The invention is not, however, limited to a particular type of method for transforming the 3D images into 2D images.
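As a rough illustration of this flattening step, the sketch below is a simplified stand-in, not the method of the cited articles: it approximates geodesic distances with Dijkstra shortest paths on the mesh edge graph instead of Fast Marching, and uses scikit-learn's metric MDS for the 2D embedding. The mesh (vertices and triangles) is assumed to be given.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from sklearn.manifold import MDS

def flatten_surface(vertices, triangles):
    """Flatten a triangulated 3D surface (average envelope) into 2D coordinates.

    vertices:  (N, 3) array of 3D mesh points.
    triangles: (M, 3) array of vertex indices forming the mesh faces.
    Returns an (N, 2) array of flattened coordinates.
    """
    n = len(vertices)
    # Collect unique mesh edges with their Euclidean lengths.
    edges = {}
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = (min(a, b), max(a, b))
            if key not in edges:
                edges[key] = float(np.linalg.norm(vertices[a] - vertices[b]))
    rows = [k[0] for k in edges]
    cols = [k[1] for k in edges]
    graph = csr_matrix((list(edges.values()), (rows, cols)), shape=(n, n))
    # Approximate geodesic distances by shortest paths along the mesh edges
    # (the patent text relies on Fast Marching for this step).
    geodesic = dijkstra(graph, directed=False)
    # Metric MDS embeds the vertices in 2D while minimizing the distortion
    # of the (approximate) geodesic distances, so inter-minutiae distances
    # are roughly preserved.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(geodesic)
```

Because the full N x N distance matrix is embedded, this sketch is only practical for modest mesh sizes; it is meant to show the principle, not to match the performance of the Fast Marching plus MDS pipeline described above.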
Besides the biometrics sector, the invention can be used for the morphological study and analysis of biological materials, in particular in the medical field, for example for medical imaging requiring the study of the morphology of surfaces of biological materials located deep under the skin. The invention can also be used to detect another fraud technique, which consists in erasing the external fingerprint, making any authentication technique based on the external fingerprint inoperative. If the aim is to detect fraud rather than to authenticate the person, the fact of seeing no external fingerprint but an internal fingerprint makes it possible to trigger an indicator of possible fraud.
Method for extracting morphological characteristics from a sample of biological material
The present invention relates to a method for extracting morphological characteristics from biological materials, in particular fingerprints, notably internal or external, using signals delivered by coherent optical tomography acquisition devices, in particular for biometrics. Optical Coherence Tomography (OCT) is a powerful non-contact optical imaging technique currently used in the medical field. It is starting to be used for consumer applications, especially for biometric applications. The problems specific to this type of application are, however, different, and are closely related to the study of surface properties delimited from the native three-dimensional information. In the study of the morphology of the different layers of biological materials located under the skin, as represented in Figure 1, the known methods exploit only the intensity information, in particular for the segmentation of intensity images in order to delimit surfaces separating two distinct biological materials. This segmentation is delicate, the intensity of the signal delivered by the OCT sensor being closely dependent on the tissues located above the tissue of interest. This creates a variability in the segmentation of the images used which hinders the extraction of the desired surface. In the field of biometrics, the images of the internal fingerprint located under the skin, at the epidermis/dermis interface, obtained after segmentation, have non-exploitable zones with this type of capture and therefore do not always allow easy and reliable identification of people. However, internal fingerprints are better preserved than external fingerprints, as they do not undergo the same alterations in their surface properties as the latter, such as scars, stains, for example ink or dirt, or variations in the surface moisture of the finger, due in particular to perspiration or ambient humidity conditions. The internal fingerprint is therefore a very relevant piece of biometric information because it is more stable over time and less dependent on environmental variations. It can also enable the authentication of a person whose external fingerprint is damaged. The internal fingerprint can also make it possible to detect an attempted identity fraud. Indeed, a known method of fraud, difficult to detect with known biometric sensors, consists in depositing on the finger of the usurper an overlay on which a fingerprint of another person is imprinted in relief. This overlay is difficult to detect, in particular because under it lies a real finger with oxygenated blood, and the temperature of the overlay is close to that of the surface of a real finger.
Figure 1 shows a typical image obtained from a finger by coherent optical tomography, as shown in Figure 3(a). The probe of the OCT sensor is moved using two galvanometric mirrors along the X and Y axes, as shown in Figure 3(a). For each position of the probe, a measurement obtained by interferometry is recorded, as described in the article by A. F. Fercher et al., "Optical Coherence Tomography - Principles and Applications", published in Reports on Progress in Physics, 2003, no. 66, pages 239-303. This consists of a measurement of the backscattered intensity as a function of the time of flight, that is to say the time it takes for the light to pass through the different layers of the examined sample. The propagation distance from the probe can be found by multiplying the time of flight by the speed of light. A reflectivity profile of the light in depth is then obtained, called "A-scan" and shown in Figure 3(b). An example of an "A-scan" intensity profile, from a finger recorded by OCT, is shown in Figure 4. The signal of interest, corresponding to the signal from the outer surface of the finger, lies between the first peak, numbered 1, and the third peak, numbered 3. Before the first peak numbered 1, only the background noise is visible. The peak numbered 1 corresponds to the air/skin interface, that is to say to the external fingerprint. This is the interface where the optical index jump is greatest, owing to the inhomogeneity of the two media, which induces the higher amplitude of the peak. In the case of attempted fraud, the peak numbered 1 corresponds to the air/overlay interface. After this peak, the signal intensity decreases overall. This decrease is due to the phenomena of absorption and scattering during the penetration of light into the tissue or the overlay, that is to say during its penetration into the different layers of the skin or of the overlay. Detecting the position of the peak of maximum intensity makes it possible to locate the air/skin or air/overlay interface. By recovering the position of the maximum of each "A-scan" profile of the tomographic volume, corresponding to the time of flight of the light between the probe and the external surface of the finger, it is possible to construct a three-dimensional surface, called the 3D surface, associated with the external fingerprint, as seen in Figure 1. The level of the backscattered signal according to the spatial position is shown in this image. In order to selectively form the image of the internal fingerprint and isolate it from the rest of the volume, the known methods rely on a spatial filtering along the Z axis, making it possible to obtain an average level around a certain depth under the skin, giving the backscattered intensity information, that is to say a mean level of reflectivity, corresponding in particular to the spatial neighborhood of the second main peak of each "A-scan" profile on the 3D surface of the internal fingerprint, and corresponding to the intensity image shown in Figure 2(a). These methods are described in particular in the article by A. Bossen et al., "Internal fingerprint identification with optical coherence tomography", published in IEEE Photonics Technology Letters, vol. 22, no. 7, 2010, and the article by M. Liu et al., "Biometric mapping of fingertip eccrine glands with optical coherence tomography", published in IEEE Photonics Technology Letters, vol. 22, no. 22, 2010.
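As an illustration of this kind of processing, the following sketch is a minimal outline under assumptions: the tomographic volume is already available as a volume[x, y, z] array of A-scan intensities with a known axial sampling step, and the dz_um and window_um values are hypothetical. It locates the maximum of each A-scan to build the external 3D surface and averages the signal in a small Z-window around that position; the same windowed average taken around the second peak would give the internal-fingerprint intensity image discussed next.

```python
import numpy as np

def external_surface_and_intensity(volume, dz_um=3.0, window_um=30.0):
    """Build the external 3D surface and an intensity image from an OCT volume.

    volume:    (Nx, Ny, Nz) array of A-scan intensities (one A-scan per (x, y)).
    dz_um:     axial sampling step in micrometers (assumed value).
    window_um: width of the Z-window averaged to form the intensity image.
    Returns (surface_um, intensity), two (Nx, Ny) arrays.
    """
    nx, ny, nz = volume.shape
    # Position of the maximum of each A-scan: the air/skin (or air/overlay)
    # interface, i.e. the external fingerprint.
    peak_index = volume.argmax(axis=2)
    surface_um = peak_index * dz_um

    # Average the backscattered intensity in a window centred on the peak.
    half = max(1, int(round(window_um / (2.0 * dz_um))))
    intensity = np.zeros((nx, ny))
    for x in range(nx):
        for y in range(ny):
            k = peak_index[x, y]
            lo, hi = max(0, k - half), min(nz, k + half + 1)
            intensity[x, y] = volume[x, y, lo:hi].mean()
    return surface_um, intensity
```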
Figure 2(b) shows the image processed by a software system, called a "matcher", to put the image in the form of binary data and to identify the minutiae of the fingerprint, the characteristic points of the fingerprint used for identification. Some areas where the fingerprint contrast is low appear in white. For the same finger, the position of these areas may vary depending on the experimental conditions or on the positioning of the finger relative to the sensor. The performance obtained from the intensity image may be limited by areas that are difficult to exploit because of the quality of the images, as can be seen in Figure 2(b). This intensity contrast for the external fingerprint is also very variable depending on the state of the surface of the finger, for example if it is stained with ink, or wet. The performance in fingerprint recognition is notably very poor in the case of wet fingers for acquisitions made by known sensors, such as, for example, two-dimensional capacitive or optical contact sensors, or the so-called "2½-dimension" non-contact optical sensors, as mentioned in the articles by R. Cappelli et al., "Performance Evaluation of Fingerprint Verification Systems", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, 2006, and L. C. Jain et al., "Intelligent Biometric Techniques in Fingerprint and Face Recognition", chapter 2, vol. 10, CRC Press, 1999. This performance degradation also occurs for non-contact biometric sensors such as OCT. Figure 5 shows an experimental example of an OCT image representative of the external fingerprint extracted in the traditional way, with the presence of microdroplets on its surface, simulating the behavior of a finger under conditions of high humidity or perspiration. The intensity image of this example of a wet finger, shown in Figure 6, contains white spots, due, among other things, to the presence of droplets that locally modify the optical properties of the finger, creating a parasitic lens effect for example. Known fingerprint readers used to authenticate people thus prove to be inoperative under high humidity conditions, since no reliable image of the fingerprint of a wet finger can be provided. In the approach presented in the aforementioned article by A. Bossen et al., the acquisition is performed with the finger in contact with a glass plate, leading to a flattening of the finger. The image of the fingerprint is then obtained by averaging the en-face intensity images, corresponding to the images in the X-Y plane according to the notations of Figure 3, contained in a small portion of the tomographic volume containing the epidermis/dermis interface where the internal fingerprint is located, shown in Figure 7 as the "junction zone". This method makes it possible to obtain a 2D texture image, denoted I(x, y), linked to the intensity information, where x and y are the coordinates of the pixels of the image. Image processing methods have been applied to the I(x, y) image to improve its contrast. In the case where the finger is not flattened during the acquisition, the image I(x, y) can be projected onto the 3D surface of the internal fingerprint, obtained thanks to the phase measurements. The internal 3D fingerprint obtained with this method is visible in Figure 8. The performance of this approach in the context of biometrics is also not sufficient because of the degraded quality of the images obtained.
There is a need to improve the quality of the information on the external or internal surface morphology of biological materials obtained from coherent optical tomography acquisition devices, in particular in order to extract and efficiently identify internal fingerprints as well as external fingerprints under difficult conditions. The invention aims to satisfy this need, and it achieves this through a method for extracting morphological characteristics from a sample of biological material, in particular fingerprints, notably internal or external, using a coherent optical tomography acquisition system delivering a signal representative of the sample, wherein an image containing intensity information and an image containing phase information are formed from at least the signal delivered by the acquisition system and representative of the sample, in order to extract morphological characteristics from the sample. The image containing intensity information and the image containing phase information are not equivalent in terms of information content. Even if their qualities are comparable, the information they contain is complementary, and makes it possible to facilitate and optimize the extraction of the morphological characteristics of the sample to be exploited. The method according to the invention can thus be used in the field of highly secure biometrics, aimed at detecting fraud in the identification of persons, in particular by using the internal fingerprint to compare it with the external fingerprint, or at reliable biometric identification under difficult conditions, for example in the case of wet or dirty fingers, or in the case of an external fingerprint that is more or less erased. In the case of a wet finger, in the image containing the phase information, the backscattered intensity maxima are always located at the fingerprint surface, and not at the water layer or droplets. The phase image is thus of much better quality, and can lead to a proper reconstruction of the 3D structure of the fingerprint, ensuring more robust biometric identification performance than that obtained from the intensity images as delivered by the known biometric sensors. Exploiting the phase information, corresponding to the time of flight of the light, makes it possible to compensate for the variability of the backscattered light intensity involved in the properties of the image, and in particular the fact that it is sensitive to the angle of incidence of the light beam with respect to the normal to the surface of the sample to be studied.
Image in phase
The second peak of the intensity of an "A-scan" profile, numbered 2 in Figure 4, is assigned to the internal fingerprint, or to the overlay/finger interface in the case of a fraud. It reflects the strong inhomogeneities of the skin at the level of the papillary dermis, a layer of skin located between the dermis and the epidermis, corresponding to a change in cellular organization, visible in Figure 7. Thus, in the same way as previously described, it is advantageous to reconstruct a 3D representation of the internal fingerprint by recovering the position of the second peak of greater reflectivity for each "A-scan" profile of the tomographic volume.
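By way of illustration, the sketch below is a minimal outline under assumptions: the tomographic volume is available as a volume[x, y, z] array of A-scan envelopes, "second peak" is read as the second well-separated prominent peak in depth order, and the dz_um, min_separation_um and prominence values are hypothetical. It records, per A-scan, the position of that second peak to form the 3D surface of the internal fingerprint.

```python
import numpy as np
from scipy.signal import find_peaks

def internal_surface(volume, dz_um=3.0, min_separation_um=60.0):
    """Locate the second reflectivity peak of each A-scan (epidermis/dermis
    junction) to build the 3D surface of the internal fingerprint.

    Returns an (Nx, Ny) array of depths in micrometers, NaN where fewer than
    two peaks were found.
    """
    nx, ny, _ = volume.shape
    distance = max(1, int(round(min_separation_um / dz_um)))
    surface_um = np.full((nx, ny), np.nan)
    for x in range(nx):
        for y in range(ny):
            a_scan = volume[x, y]
            # Keep only clearly separated, reasonably prominent peaks.
            peaks, _ = find_peaks(a_scan, distance=distance,
                                  prominence=0.05 * a_scan.max())
            if len(peaks) >= 2:
                # Peaks are returned in depth order: the first is the external
                # fingerprint, the second the internal one.
                surface_um[x, y] = peaks[1] * dz_um
    return surface_um
```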
A depth reflectivity profile of the light being established from the signal representative of the sample delivered by the acquisition system, this reflectivity profile having several peaks of maximum reflectivity, the position of a peak of maximum reflectivity of said reflectivity profile, chosen according to the type of data to be extracted, can be determined in order to form the image containing phase information. The peak of interest for the external fingerprint preferably corresponds to the first peak of maximum reflectivity of the reflectivity profile, and the peak of interest for the internal fingerprint preferably corresponds to the second peak. Once the position of the peak of interest has been determined, a spatial filtering of the signal can be carried out, in particular a band-pass filtering of the interferogram in the spatial domain, consisting at least in retaining the interferometric signal contained in a window centered on the peak of interest and of predefined width, in particular of the order of magnitude of the axial resolution of the OCT acquisition system. A transformation is then advantageously applied to this signal in order to obtain spectral information, in particular spectral information in intensity and in phase concerning the scattering recorded at the air/finger interface in the case of an external fingerprint, or at the epidermis/dermis interface in the case of an internal fingerprint, the transformation being in particular a Hilbert transform, to obtain the complex interferometric signal, followed by a Fourier transform. In order to obtain the desired phase information, the slope of the phase is advantageously calculated by a linear regression of the spectral dependence of the phase, obtained from the spectral information produced by transforming the spatially filtered signal. In the case where the sample is a fingerprint, the reference used to measure the phase information is preferably the average envelope of the surface of the finger. This average envelope corresponds to the envelope surface of the finger without the ridges, as represented in Figure 9. A 3D surface can be coded as a topographic image S(x, y), where each (x, y) is associated with a depth value, or preferably here, a time-of-flight or phase value. The average envelope, called Em(x, y), is then obtained by applying an averaging filter, in particular a 2D low-pass filter, to the topographic image S(x, y). Since the ridges are of higher spatial frequencies, they are advantageously eliminated during the filtering operation. A 2D texture image P(x, y), called the phase image, can be obtained by subtraction between S(x, y) and Em(x, y): P(x, y) = S(x, y) - Em(x, y). In this way, the reference of the time-of-flight or phase measurements is no longer taken at the level of the sensor probe but at the level of the average envelope. Consequently, the resulting image advantageously represents not the phase values Φ, but rather their variations ΔΦ, which makes it possible to have a more contrasted texture image. The contrast of this texture image can be further improved by applying an adaptive histogram equalization and then a contrast adjustment using a sigmoid function, the midpoint of which is determined by the Otsu method, which considers that the image to be binarized contains only two classes of pixels, the foreground and the background, and computes the optimal threshold that separates these two classes so that their intra-class variance is minimal.
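As an illustration of the construction of the texture image P(x, y) and of the contrast enhancement just described, here is a minimal sketch under assumptions: the topographic image S(x, y) of time-of-flight or phase values is already available, a Gaussian filter stands in for the 2D low-pass averaging filter, and the filter width and sigmoid gain are hypothetical choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure, filters

def phase_texture_image(S, envelope_sigma_px=15.0, sigmoid_gain=10.0):
    """Compute a contrast-enhanced phase texture image P(x, y).

    S: 2D topographic image of time-of-flight (or phase) values, one per pixel.
    """
    # Average envelope Em(x, y): 2D low-pass filtering removes the ridges,
    # which carry the higher spatial frequencies.
    Em = gaussian_filter(S, sigma=envelope_sigma_px)
    # Phase texture image: variations with respect to the average envelope.
    P = S - Em

    # Rescale to [0, 1] before contrast enhancement.
    P = (P - P.min()) / (P.max() - P.min() + 1e-12)
    # Adaptive histogram equalization (CLAHE).
    P = exposure.equalize_adapthist(P)
    # Sigmoid contrast adjustment whose midpoint is the Otsu threshold.
    t = filters.threshold_otsu(P)
    return 1.0 / (1.0 + np.exp(-sigmoid_gain * (P - t)))
```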
In the same way as for the image I(x, y), the texture image P(x, y) can be projected onto the 3D surface of the internal fingerprint shown in Figure 11(a). The invention makes it possible to detect fraud using overlays, by comparing the fingerprint associated with the first peak of maximum reflectivity with that associated with the second peak. If these fingerprints are different, there is fraud.
Fusion of images in intensity and phase
In a preferred embodiment of the invention, the image containing intensity information and the image containing phase information are merged to form a single image. To do this, the structure of each image respectively containing intensity information and phase information is analyzed in order to establish, for each image, a confidence map containing a quality value for each pixel, according to the neighboring pixels. The confidence maps of the images are based in particular on the presence of good contrast and on the local quality of the ridges present in the images, in the case of a fingerprint. Each pixel of the fusion image F between the image I containing intensity information and the image P containing phase information is advantageously derived from a linear combination of the values of the corresponding pixels of the two images, weighted by the quality values of the confidence maps: F(x, y) = α_I(x, y)·I(x, y) + α_P(x, y)·P(x, y), with, for example, α_I(x, y) = C_I(x, y)/Norm and α_P(x, y) = C_P(x, y)/Norm, where (x, y) are the coordinates of a pixel, C_I(x, y) is the quality value of the pixel (x, y) of the image I, with 0 ≤ C_I(x, y) ≤ 1, C_P(x, y) is the quality value of the pixel (x, y) of the image P, with 0 ≤ C_P(x, y) ≤ 1, and Norm = C_I(x, y) + C_P(x, y). If Norm = 0, α_I = α_P = 0.5 is preferably set. Depending on the fusion formula used, the values α_I and α_P can be expressed differently; the invention is not limited to a particular calculation for the values α_I and α_P. In a variant, the fusion image between the image containing intensity information and the image containing phase information is advantageously formed by retaining, for each pixel, the pixel of the image having the highest quality value: F(x, y) = I(x, y) if C_I(x, y) > C_P(x, y), and F(x, y) = P(x, y) if C_I(x, y) ≤ C_P(x, y). The fusion image between the image containing intensity information and the image containing phase information is thus advantageously formed pixel by pixel, depending on the neighborhood of each pixel, thanks to the confidence maps. In the case where the image considered is a fingerprint, the quality value of a pixel (C_P(x, y) or C_I(x, y)) can be obtained from reliability maps of the ridge orientation fields of the fingerprint ("orientation field reliability maps"), as described in the articles by J. Zhou and J. Gu, "A model-based method for the computation of fingerprints' orientation field", IEEE Transactions on Image Processing, vol. 13, no. 6, 2004, and M. S. Khalil, "Deducting fingerprint singular points using orientation field reliability", First Conference on Robot, Vision and Signal Processing, pp. 234-286, 2011. The ridge orientation fields represent the ridge direction at each position in the fingerprint. They are computed for each pixel of the fingerprint image, according to its neighborhood. The use of such orientation fields in fingerprint biometrics is known, for example in methods for enhancing fingerprint images such as that described in the article by L. Hong et al., "Fingerprint image enhancement: algorithm and performance evaluation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, 1998.
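To make the fusion rule concrete before the reliability maps are discussed in more detail just below, here is a minimal sketch. The structure-tensor coherence used for the confidence maps is one plausible measure consistent with orientation-field reliability, not necessarily the exact measure used by the method; the window size and normalization are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def reliability_map(img, window=16):
    """Per-pixel quality value in [0, 1] from the coherence of the local ridge
    orientation (structure-tensor coherence), one possible choice of
    'orientation field reliability'."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = uniform_filter(gx * gx, window)
    jyy = uniform_filter(gy * gy, window)
    jxy = uniform_filter(gx * gy, window)
    num = np.hypot(jxx - jyy, 2.0 * jxy)   # lambda1 - lambda2
    den = jxx + jyy                        # lambda1 + lambda2
    return np.where(den > 1e-12, num / (den + 1e-12), 0.0)

def fuse(I, P, C_I, C_P):
    """Pixel-wise weighted fusion of the intensity image I and the phase image
    P, weighted by the confidence maps C_I and C_P (alpha = 0.5 when both
    quality values are zero)."""
    norm = C_I + C_P
    safe = np.where(norm > 0, norm, 1.0)
    alpha_I = np.where(norm > 0, C_I / safe, 0.5)
    alpha_P = np.where(norm > 0, C_P / safe, 0.5)
    return alpha_I * I + alpha_P * P

def fuse_max(I, P, C_I, C_P):
    """Variant: keep, for each pixel, the image with the higher quality."""
    return np.where(C_I > C_P, I, P)
```

Typical use would be F = fuse(I, P, reliability_map(I), reliability_map(P)), giving the per-pixel weighted fusion image described above.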
The reliability maps of the orientation fields make it possible to evaluate the validity and reliability of the estimate of the ridge orientation. A low-quality fingerprint image region can be characterized by the fact that the ridge texture is not apparent, the periodic structure characteristic of the ridges not being found there. In such regions, the orientation estimate is poor because there is no predominant orientation. Therefore, the reliability value is low. Conversely, in highly structured fingerprint regions, the presence of a particular direction can be reliably estimated. The reliability value for these regions is high. As explained in the articles by C. Sousedik et al., "Volumetric Fingerprint Data Analysis using Optical Coherence Tomography", BIOSIG Conference, 2013, pp. 1-6, and C. Sousedik and C. Busch, "Quality of fingerprint scans captured using Optical Coherence Tomography", IJCB Conference, 2014, pp. 1-8, the structure of the internal fingerprint can be quite inhomogeneous, unlike that of the external fingerprint, which is quite continuous, leading to an ambiguity on the position of the second peak of maximum reflectivity. The structure of the internal fingerprint can also vary greatly from one individual to another. Detecting the position of the internal fingerprint by time-of-flight measurements can be tricky, as the interface is not necessarily well defined. In addition, the backscattering of light in the skin involves complex physical phenomena that are difficult to predict, owing to the interference between the multiple waves backscattered by the biological structures. It is not obvious that, in a fingerprint, the ridge tops correspond to reflectivity maxima and the valleys to minima, or vice versa. The fusion of the phase and intensity images makes it possible to make the most of the information available across the two images, and thus to significantly improve the quality of the final image obtained on the desired surface. For example, in the area of biometrics, this results in a significant improvement in identification performance based on the subcutaneous fingerprint, using known biometric identification algorithms.
Location accuracy
The location accuracy of the peaks of maximum reflectivity partly determines the quality of the 3D and 2D phase images. The location accuracy, which is different from the axial resolution, is a concept little known in the prior art, and little exploited in biomedical applications. The axial resolution is the minimum distance required between two scattering centers in order to be able to distinguish them correctly, and depends only on the spectral width of the light source. It can be measured from the width at half maximum of a peak associated with a single scattering center, for example the first peak, numbered 1. The location accuracy is advantageously related to the error in locating the maximum of the envelope of the different "A-scan" profiles. In order to evaluate the location accuracy, a statistical study is performed, consisting in simulating the peak associated with a single scattering center, whose position is fixed during the simulation, taking into account the different noise contributions of the photodetector of the acquisition system, mainly thermal noise and shot noise, which have distributions similar to white noise. This noise can have a greater or lesser impact, depending on its power, on the measured position of the peak maximum. The error made on the position can be evaluated by the difference between the position of the maximum of the simulated noisy "A-scan" profile and that of the reference "A-scan" profile used, which is known beforehand. The location accuracy of the acquisition system is then defined as the standard deviation of this location error. The standard deviation is advantageously obtained from a large number of random draws of noisy "A-scan" profiles.
Device
According to another of its aspects, the invention relates to a device for extracting morphological characteristics from a sample of biological material, in particular fingerprints, notably internal or external, comprising a coherent optical tomography acquisition system delivering a signal representative of the sample, the device being configured to form, from at least the signal delivered by the acquisition system and representative of the sample, an image containing intensity information and an image containing phase information, in order to extract morphological characteristics from the sample. In a preferred embodiment of the invention, the device is further configured to merge the image containing intensity information and the image containing phase information to form a single image. The characteristics described above for the method according to the invention apply to the device. The field of view of the device, corresponding to the maximum spatial extent in the X-Y plane that can be recorded, can be extended, reaching for example 2 mm by 2 mm, i.e. 4 mm². This makes it possible to obtain a large number of minutiae in the case of the extraction of a fingerprint. The invention will be better understood on reading the following detailed description of non-limiting examples of implementation thereof, and on examining the appended drawing, in which:
- Figure 1, previously described, represents a volume image obtained by a coherent optical tomography acquisition system on a finger,
- Figures 2(a) and 2(b), previously described, represent, respectively, the intensity image and the processed image obtained, according to the prior art, from the volume of Figure 1,
- Figures 3(a) and 3(b), previously described, represent, respectively, the acquisition of a fingerprint by tomography and the "A-scan" profile obtained as a function of the time of flight of the light,
- Figure 4, previously described, represents the intensity of an "A-scan" profile as a function of depth,
- Figure 5, previously described, illustrates the presence of water drops on the surface of a finger,
- Figure 6, previously described, represents the intensity image of the wet finger of Figure 5, obtained by OCT according to the prior art,
- Figure 7, previously described, represents the cross-section of a tomographic volume obtained according to the prior art,
- Figure 8, previously described, represents the internal 3D fingerprint obtained from the volume of Figure 7 according to a method of the prior art,
- Figure 9, previously described, illustrates the average envelope of the surface of a finger,
- Figure 10 represents an OCT device according to the invention,
- Figure 11(a) represents the phase image and Figure 11(b) the intensity image of the internal fingerprint, obtained by implementing the method according to the invention from the tomographic volume of Figure 1,
- Figures 12(a) and 12(b) represent, respectively, the phase image and the processed image of the internal fingerprint obtained, according to the invention, from the tomographic volume of Figure 1,
13 illustrates a comparison between two in-phase images obtained according to the invention, FIGS. 14 (a) and 14 (b) represent, respectively, the merged image of the information in FIG. phase and intensity, and the image processed, obtained according to the invention, - Figure 15 shows internal fingerprints, the associated minutiae, and the reliability map of the orientation of the grooves for images in phase, in intensity and for the fusion of the two, obtained according to the invention, - Figure 16 is a graph showing performance curves obtained by the implementation of the method according to the invention, - Figure 17 is a graph representing the probability densities of client scores and impostor scores, using a database of internal fingerprint images extracted according to the invention; - FIG. 18 represents an intensity image of a fingerprint in the case of a moistened finger, FIGS. 19 (a) and 19 (b) show, respectively, an image after merging a fingerprint in the case of the humidified finger of FIG. 17, obtained according to the invention, and the corresponding phase image; FIG. 20 is a graph showing the location error according to the invention as a function of the signal-to-noise ratio and the axial resolution; FIG. 21 is a graph presenting comparative performance curves, and FIG. comparison of images obtained from a wet finger, with sensors according to the prior art and according to the invention. An OCT device 10 for carrying out the invention is shown in FIG. 10. This device 10 comprises a scanning source 11 configured to scan the sample at different depths, a mirror 12, a partial mirror 13, and an interferometer by Michelson 14. Each wavelength scan, or "A-scan" in English, produces interference fringes from the reflections of the sample at different depths. As described above, a 3D image in phase of the internal cavity is obtained, according to the invention, from the tomographic volume of FIG. 1, and is represented in FIG. 11 (a). The intensity image of the same internal footprint, shown in Figure 11 (b), shows unusable areas with very low contrast. These zones are random because they depend inter alia on the local diffusion properties of the biological tissue but also on the angle of incidence of the probe of the coherent optical tomography acquisition system, especially in the case of a contactless measurement where the measurement is not reproducible. Fig. 12 (a) shows a raw phase image, Fig. 12 (b) showing the corresponding image delivered at the output of the "matcher". These images compare to the intensity images presented in Figure 2, previously described. The positions of the unusable areas of the image of Figure 12 (b) are different from that of Figure 2 (b). Thus, using both the characteristics extracted from the image in intensity and those of the image in phase makes it possible to improve the identification of the person corresponding to this fingerprint. FIG. 13 (a) represents a 3D image of the external imprint on which the information of the phase, projected from the tomographic volume of FIG. 1, according to the invention, has been projected. The strong values, shown in white, correspond to a short flight time between the OCT acquisition system probe and the footprint surface, and the low intensity values, shown in black, correspond to a flight time. longer. This example does not make it possible to obtain good impression images directly, since the furrows can not be discerned properly. 
This is because the reference for measuring the flight time, i.e. the probe of the OCT acquisition system, is not located at an equal distance from all the points of the surface of the finger. In order to obtain a better contrast, as described above, the average envelope of the surface of the finger is taken as the reference for the flight time. As seen in Figure 13(b), which represents the 3D fingerprint onto which the delta-phase information, i.e. the phase variations, has been projected, this being the relevant information for obtaining well-contrasted fingerprint images, the furrows are in this case clearly visible.

As previously described, the image containing intensity information and the image containing phase information are merged to form a single image, using the confidence maps of each image, which deliver pixel-by-pixel quality values. An image formed by the fusion of the intensity image of Figure 2(a) and the phase image of Figure 12(a) is visible in Figure 14(a), the image corresponding to the output of the "matcher" being shown in Figure 14(b). Thanks to the fusion, the resulting image is of much better quality, the unusable areas having almost disappeared.

Figure 15 shows internal fingerprints, the associated minutiae, and the furrow orientation reliability map for the phase, intensity and fusion images. Complementary images were considered in order to use the usual representation of fingerprints. The images in the first row correspond to the flattened internal fingerprint images in the three representations. The images in the second row represent the same images after preprocessing and binarization steps, the VeriFinger software, developed by Neurotechnology, having been used in the example described. In these images, the minutiae extracted from the binarized image, represented by the black dots and exploited by the matchers, are used for the identification step by matching the minutiae of two fingerprint images. For both the phase and intensity representations, the image quality is poor in some regions, as indicated by the black circles. In such regions, the fingerprint furrows are not visible. As a result, the quality of these regions is not sufficient to ensure correct minutia detection, as illustrated by the white holes in the binarized images. In the representations of the reliability maps of the furrow orientation fields, dark pixels correspond to low reliability values while bright pixels correspond to high values. In the intensity and phase representations, low reliability values are associated with areas of poor quality. It should be noted that, in the example described, the problematic regions are not located at the same place in the two representations. As visible in the last column of Figure 15, the internal fingerprint image obtained after fusing the intensity and phase images is of much better quality, this image having been reconstructed by choosing the best regions of the two representations. The structure of the furrows is better preserved throughout the image. The regions with holes in the binarized image have disappeared, which leads to more robust minutia detection. The reliability map of the post-fusion image is a good illustration of the improvement in overall image quality, with brighter and more extensive areas.
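As a purely illustrative sketch of the fusion step just described (and recited in claims 10 and 11 below), the following Python fragment combines an intensity image and a phase image pixel by pixel using their quality maps. The function names are hypothetical, the quality maps are assumed to be given (for example derived from orientation-field reliability maps), and the normalization of the weights, with equal weights where both quality values vanish, is one natural choice among others rather than the only possible one.

```python
import numpy as np

def fuse_weighted(img_i, img_p, q_i, q_p):
    """Pixel-wise fusion of an intensity image (img_i) and a phase image (img_p),
    weighted by their per-pixel quality maps q_i and q_p (values in [0, 1]).
    The weights are normalized so that they sum to 1; where both quality values
    are zero, equal weights of 0.5 are used."""
    norm = q_i + q_p
    safe = np.where(norm > 0, norm, 1.0)          # avoid division by zero
    a_i = np.where(norm > 0, q_i / safe, 0.5)     # weight of the intensity image
    a_p = 1.0 - a_i                               # weight of the phase image
    return a_i * img_i + a_p * img_p

def fuse_best_pixel(img_i, img_p, q_i, q_p):
    """Variant: keep, for each pixel, the value coming from the image whose
    quality map is higher at that pixel."""
    return np.where(q_i >= q_p, img_i, img_p)
```

In regions where one modality has low contrast, and therefore a low quality value, the fused pixel is dominated by the other modality, which is consistent with the disappearance of the unusable areas observed in Figures 14(a) and 15.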
Figure 16 represents a comparison of the performances obtained for the various representations, on a database comprising one hundred fingers, in terms of false acceptance rate (FAR) as a function of false rejection rate (FRR). These detection error tradeoff (DET) curves, giving the false acceptance rate as a function of the false rejection rate, are commonly used to evaluate the performance of biometric systems. The lower these curves, the better the performance, a minimum false rejection rate being sought for a given false acceptance rate. The small-dashed curve corresponds to the reference curve, obtained with a phase image of the external fingerprint, this fingerprint being by nature easily accessible to the various sensors. The coarse dotted curve and the broken dashed curve correspond, respectively, to the curves of the internal fingerprints extracted from the intensity and phase images, and lie at approximately the same level. For a false acceptance rate of 10⁻³, for example, the false rejection rate is degraded by a factor of 2 to 3 compared to the false rejection rate associated with the reference curve. This result demonstrates the difficulty of accessing the internal fingerprint. The solid curve is calculated from the images after fusion. For this same false acceptance rate, the false rejection rate is reduced by a factor of approximately 3 to 4 compared to the one associated with the curves corresponding to the phase and intensity internal fingerprint images. In another example, for a false acceptance rate of 0.01%, the false rejection rate is about 7% for post-fusion images, compared to 26% for phase images and 20% for intensity images. For a false acceptance rate of 0.1%, the false rejection rate is about 4% for post-fusion images, compared to 20% for phase images and 14% for intensity images. It should also be noted that performance is better with post-fusion images than with phase images of the external fingerprint, the internal fingerprint being better preserved than the external one.

Probability densities of client and impostor scores, obtained on a database of internal fingerprint images extracted according to the invention, comprising 102 different fingers from 15 individuals, each finger having been acquired four times, are shown in Figure 17. The internal fingerprint images in the three representations, intensity, phase and after fusion, were extracted from the tomographic volumes. For the verification tests, each internal fingerprint image was compared with all the other images in the database, resulting in a total of 166,056 fingerprint comparisons. The comparison of two images from the same finger is called genuine, or client, matching, and the comparison of two images from different fingers is called impostor matching. Similarity scores are calculated with NBIS (NIST Biometric Image Software). In this example, the MINDTCT algorithm extracts the minutiae of a fingerprint image and the BOZORTH3 "matcher" returns the similarity score between two images. Two probability densities of scores, the client density and the impostor density, are obtained; the ability to separate these densities quantifies the performance of a biometric system. The final decision is made by comparing the similarity score obtained with a threshold, chosen according to the score densities and the desired performance. Since the client and impostor densities overlap, false rejection or false acceptance errors are made during decision making.
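For illustration only, the following sketch shows how false acceptance and false rejection rates can be derived from lists of genuine (client) and impostor similarity scores by sweeping the decision threshold, which is how DET curves such as those of Figure 16 are usually built. The scores are assumed to come from a minutiae matcher such as BOZORTH3; the function name and the acceptance convention (accept when the score is at least the threshold) are assumptions, not details taken from the patent.

```python
import numpy as np

def det_points(genuine_scores, impostor_scores):
    """Return arrays (far, frr, thresholds) obtained by sweeping the decision
    threshold over all observed similarity scores. A comparison is accepted
    when its score is greater than or equal to the threshold."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # clients rejected
    return far, frr, thresholds

# Example of reading an operating point (score arrays assumed available):
# far, frr, thr = det_points(genuine_scores, impostor_scores)
# usable = far <= 1e-3                        # constrain the false acceptance rate
# best_frr = frr[usable].min() if usable.any() else None
```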
Verification performance is finally evaluated using the performance curves obtained by varying the matching threshold. The performances obtained demonstrate that the internal fingerprint allows identification of persons with performance comparable to that obtained by known biometric readers on the external fingerprint of a dry finger. Identifying people with dirty or wet fingers is also more effective than is possible using known biometric systems.

Figure 21 presents a comparison of the performances obtained by using the internal fingerprint extracted by fusion according to the invention with those obtained by using the external fingerprint extracted by sensors according to the prior art, for example a 2D capacitive sensor. Similar FRRs are obtained for a fixed FAR. By extension, in the case of wet fingers, the performances obtained using the internal fingerprint extracted by fusion according to the invention are better than those obtained by the sensors according to the prior art, for example a 2D capacitive sensor. Indeed, the performance of the 2D capacitive sensor in the wet case is necessarily worse than that presented in the normal case, illustrated by the dashed curve of Figure 21.

Figures 18 and 19 show fingerprints obtained in the case of wet fingers. As shown in Figure 18, the intensity image shows very low contrast zones in the wet areas. The corresponding images in phase and after fusion, obtained according to the invention, are represented respectively in Figures 19(b) and 19(a). The phase image is of better quality than the intensity image and directly exploitable, and the image after fusion is of very good quality, presenting hardly any defects which could affect the identification of the fingerprint.

Figures 22(a) and 22(b) show fingerprint images of a wet finger for two known 2D sensors, respectively an optical sensor and a capacitive sensor. Black stains due to excessive finger humidity are visible in the images. These stains significantly degrade the quality of the images, and therefore decrease authentication performance. The corresponding binarized images show that the stained areas have not been recognized as part of the fingerprint. In comparison, the phase image obtained according to the invention, shown in Figure 22(c), is of much better quality.

Figure 20 represents the standard deviation of the location error as a function of the signal-to-noise ratio (SNR), defined as the ratio between the level of the intensity peak and that of the background noise, as described previously with reference to Figure 4, for different axial resolutions, from 5 µm to 25 µm. For a signal-to-noise ratio of 50 dB, corresponding to a typical value for backscattering at the air/skin interface, the location error is estimated at between 60 nm and 350 nm. The location error is well below the axial resolution of the acquisition system, evaluated at about 10 µm in the example under consideration. The location accuracy is also well below the order of magnitude of the wavelength of the light source used, approximately equal to 1300 nm. Considering, according to the ergodicity principle, that the ensemble statistics of the simulated "A-scan" profiles are equivalent to the spatial statistics, it appears that the contribution of the noise during the extraction of the 3D surface of the fingerprints is negligible compared to the average depth of a furrow, approximately equal to 50 µm.
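The Monte Carlo evaluation of the location accuracy described above can be sketched as follows, under simplifying assumptions that the text does not fix: a Gaussian-shaped A-scan envelope whose full width at half maximum equals the axial resolution, additive white Gaussian noise set by an amplitude signal-to-noise ratio, and peak picking by a simple argmax on a fine depth grid. The parameter values are illustrative; sweeping snr_db only reproduces the general behaviour of Figure 20, not its exact values.

```python
import numpy as np

def location_error_std(snr_db, axial_res_um=10.0, dz_um=0.05, n_draws=2000, rng=None):
    """Standard deviation (in micrometres) of the peak-location error of a noisy
    A-scan, estimated by Monte Carlo simulation. Assumed model: Gaussian envelope
    whose FWHM equals the axial resolution, additive white Gaussian noise defined
    by an amplitude SNR in dB, and peak picking by argmax on a fine depth grid."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.arange(-5.0 * axial_res_um, 5.0 * axial_res_um, dz_um)   # depth axis (um)
    sigma = axial_res_um / (2.0 * np.sqrt(2.0 * np.log(2.0)))       # FWHM -> sigma
    reference = np.exp(-0.5 * (z / sigma) ** 2)                     # peak of amplitude 1 at z = 0
    noise_std = 10.0 ** (-snr_db / 20.0)                            # amplitude SNR convention
    errors = np.empty(n_draws)
    for k in range(n_draws):
        noisy = reference + rng.normal(0.0, noise_std, z.size)
        errors[k] = z[np.argmax(noisy)]                             # true peak is at z = 0
    return errors.std()

# location_error_std(50.0)  # sub-resolution error at an SNR of 50 dB
```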
The invention thus makes it possible to distinguish correctly, by phase measurements, the valleys and the ridges of the fingerprints. In addition, even in the case of lower instrumental performance, that is to say a low axial resolution, it is still possible to extract the furrows of the fingerprint with great precision. The invention can make it possible to provide an OCT biometric sensor that performs well in imaging, at a lower cost than known sensors. The invention is not limited to the examples which have just been described.

3D fingerprint identification requires more complex tools than traditional 2D image matching tools, as described in the article by A. Kumar and C. Kwong, "Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification", IEEE CVPR Conference, 2013, pp. 3438-3443. In order to be able to reuse already existing tools, the 3D fingerprints obtained according to the invention are advantageously transformed into 2D images by means of a texture mapping method on 3D surfaces, close to that described in the article by G. Zigelman et al., "Texture mapping using surface flattening via multidimensional scaling", IEEE Transactions on Visualization and Computer Graphics, Vol. 8, No. 2, 2002. This method relies on the "Fast Marching" algorithm, described in R. Kimmel and J. A. Sethian, "Computing geodesic paths on manifolds", Proceedings of the National Academy of Sciences (Applied Mathematics), Vol. 95, pp. 8431-8435, 1998, and on the MDS ("Multidimensional Scaling") algorithm. For the flattening of a 3D fingerprint, the Fast Marching algorithm is used to calculate geodesic distances on a triangular mesh of its average envelope, i.e. the 3D surface of the fingerprint without furrows. The Multidimensional Scaling algorithm is then applied to transform the meshed 3D surface into a 2D image, under the constraint of minimizing the distortions of the geodesic distances, which limits the deformation of the relative positions of the minutiae and is particularly interesting in the context of biometrics; a minimal illustrative sketch of this flattening step is given at the end of this description. Different texture images can be projected onto this flattened 2D surface, for example the intensity texture image I(x, y), the phase texture image P(x, y), or the fusion texture image F(x, y). The invention is however not limited to a particular type of method for transforming 3D images into 2D images.

In addition to the biometrics sector, the invention can be used in the morphological study and analysis of biological materials, particularly in the medical field, for example for medical imaging requiring the study of the morphology of surfaces of biological materials located deep beneath the skin.

The invention can also be used to detect another fraud technique, consisting in erasing the external fingerprint, which renders inoperative any authentication technique based on the external fingerprint. If the aim is to detect fraud, rather than to authenticate the person, detecting no external fingerprint while detecting an internal fingerprint can trigger an indicator of possible fraud.
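As announced above, here is a minimal sketch of the flattening step, using classical multidimensional scaling applied to a matrix of pairwise geodesic distances between mesh vertices. The geodesic distances are assumed to have been computed beforehand, for example with a Fast Marching implementation, and this classical MDS is only close to, not identical with, the method of Zigelman et al. cited above; the function name is hypothetical.

```python
import numpy as np

def flatten_by_classical_mds(geodesic_dist):
    """Flatten a 3D fingerprint mesh into 2D coordinates by classical
    multidimensional scaling. geodesic_dist is the (n, n) symmetric matrix of
    pairwise geodesic distances between the n mesh vertices, assumed to have
    been computed beforehand (for example with a Fast Marching implementation).
    Returns an (n, 2) array of 2D positions approximating those distances."""
    d2 = np.asarray(geodesic_dist, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ d2 @ j                      # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(b)         # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:2]         # keep the two largest
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))
```

The intensity, phase or fusion texture values attached to the mesh vertices can then be interpolated onto a regular grid in the flattened coordinates to produce a standard 2D fingerprint image.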
Claims (15)

1. A method of extracting morphological characteristics from a sample of biological material, in particular fingerprints, in particular internal or external, using a coherent optical tomography acquisition system delivering a signal representative of the sample, in which method an image containing intensity information and an image containing phase information are formed from at least the signal supplied by the acquisition system and representative of the sample, in order to extract morphological characteristics from the sample.

2. The method of claim 1, wherein the image containing intensity information and the image containing phase information are merged to form a single image.

3. The method according to claim 1 or 2, wherein, a profile of the reflectivity of the light in depth being established from the signal representative of the sample delivered by the acquisition system, this reflectivity profile having several maximum reflectivity peaks, the position of one maximum reflectivity peak of said reflectivity profile is determined, according to the type of data to be extracted, in order to form the image containing phase information.

4. The method according to the preceding claim, wherein, once the position of the peak of interest has been determined, spatial filtering of the signal is performed, including bandpass filtering of the interferogram in the spatial domain, consisting at least in retaining the interferometric signal contained in a window centered on the peak of interest and of predefined width, in particular of the order of magnitude of the axial resolution of the acquisition system.

5. The method according to the preceding claim, wherein a transformation is applied to the spatially filtered signal in order to obtain spectral information, in particular spectral intensity and phase information concerning the scattering recorded at the air/finger interface in the case of an external fingerprint or at the epidermis/dermis interface in the case of an internal fingerprint, the transformation notably being a Hilbert transform to obtain the complex interferometric signal, followed by a Fourier transform.

6. The method according to the preceding claim, wherein, in order to obtain the phase information necessary for the formation of the phase image, the slope of the phase is calculated by a linear regression of the spectral dependence of the phase obtained from the spectral information obtained by transforming the spatially filtered signal.

7. The method according to any one of claims 3 to 6, wherein the peak of interest for the external fingerprint corresponds to the first maximum reflectivity peak of the reflectivity profile, and the peak of interest for the internal fingerprint corresponds to the second peak.

8. The method according to any one of the preceding claims, wherein, in the case where the sample is a fingerprint, the reference used to measure the phase information is the average envelope of the surface of the finger.

9. The method according to any one of claims 2 to 8, wherein, to merge the image containing intensity information and the image containing phase information, the structure of each image is analyzed in order to establish, for each image, a confidence map containing a quality value for each pixel, as a function of the neighboring pixels.
10. The method according to the preceding claim, wherein each pixel of the fusion image between the image containing intensity information and the image containing phase information is derived from a linear combination of the values of the corresponding pixels of the two images, weighted by the quality values of the confidence maps.

11. The method of claim 9, wherein the fusion image between the image containing intensity information and the image containing phase information is formed by retaining, for each pixel, the pixel of the image having the highest quality value.

12. The method according to any one of claims 9 to 11, wherein, in the case where the sample is a fingerprint, the quality value of a pixel is obtained from reliability maps of the orientation fields of the furrows of the fingerprint.

13. A device for extracting morphological characteristics from a sample of biological material, in particular fingerprints, in particular internal or external, comprising a coherent optical tomography acquisition system delivering a signal representative of the sample, the device being configured to form, from at least the signal delivered by the acquisition system and representative of the sample, an image containing intensity information and an image containing phase information, in order to extract morphological characteristics from the sample.

14. The device according to the preceding claim, further configured to merge the image containing intensity information and the image containing phase information to form a single image.

15. The device according to claim 13 or 14, having an extended field of view, corresponding to the maximum spatial extent in the X-Y plane that can be recorded, notably reaching 2 mm by 2 mm, or 4 mm².
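Purely as an illustration of one possible reading of the processing chain recited in claims 4 to 6 (window the depth signal around the peak of interest, obtain the complex interferometric signal by a Hilbert transform, Fourier-transform it, then fit the spectral phase by linear regression), the following sketch estimates the spectral phase slope of an echo. The sampling, the band-selection rule and the sign convention are assumptions; the patent does not fix these implementation details.

```python
import numpy as np
from scipy.signal import hilbert

def spectral_phase_slope(signal, z, peak_index, window_width):
    """Estimate the spectral phase slope of the echo located at z[peak_index] in
    a real depth-domain interferometric signal sampled on the regular axis z.
    Steps (one reading of claims 4 to 6): keep a window centred on the peak,
    take the complex signal with a Hilbert transform, Fourier-transform it, and
    fit the unwrapped spectral phase by linear regression. The slope varies
    linearly with the sub-resolution position of the echo along the depth axis."""
    dz = z[1] - z[0]
    mask = np.abs(z - z[peak_index]) <= window_width / 2.0   # spatial window
    windowed = np.where(mask, signal, 0.0)
    analytic = hilbert(windowed)                             # complex interferometric signal
    spectrum = np.fft.fft(analytic)
    freqs = np.fft.fftfreq(z.size, d=dz)
    band = np.abs(spectrum) > 0.1 * np.abs(spectrum).max()   # crude band selection
    phase = np.unwrap(np.angle(spectrum[band]))
    slope, _ = np.polyfit(freqs[band], phase, 1)             # linear regression of the phase
    return slope
```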